
Visual localisation and individual identification of Holstein Friesian cattle via deep learning



Abstract

In this paper, we demonstrate that computer vision pipelines utilising deep neural architectures are well suited to automated Holstein Friesian cattle detection, as well as individual identification, in agriculturally relevant setups. To the best of our knowledge, this work is the first to apply deep learning to the task of automated visual bovine identification. We show that off-the-shelf networks can perform end-to-end identification of individuals in top-down still imagery acquired from fixed cameras. We then introduce a video processing pipeline composed of standard components to efficiently process dynamic herd footage filmed by Unmanned Aerial Vehicles (UAVs). We report on these setups, as well as the context, training and evaluation of their components. Alongside this work we publish two new datasets: FriesianCattle2017, consisting of in-barn top-down imagery, and AerialCattle2017, consisting of outdoor cattle footage filmed by a DJI Inspire MkI UAV. We show that Friesian cattle detection and localisation can be performed robustly on this data with an accuracy of 99.3%. We evaluate individual identification exploiting coat uniqueness on 940 RGB stills taken in-barn after milking (89 individuals, accuracy = 86.1%). We also evaluate identification via the video processing pipeline on 46,430 frames originating from 34 clips (approx. 20 s each) of UAV footage taken during grazing (23 individuals, accuracy = 98.1%). These tests suggest that, particularly when videoing small herds in uncluttered environments, marker-less Friesian cattle identification is not only feasible using standard deep learning components, but appears robust enough to assist existing tagging methods.
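As a concrete illustration of what identifying individuals with "off-the-shelf networks" can look like in practice, the sketch below fine-tunes a pretrained ResNet-50 classifier on per-individual image folders. This is a minimal sketch under stated assumptions, not the authors' exact pipeline: the backbone choice, the folder layout assumed for FriesianCattle2017, the image size and the hyperparameters are all illustrative, and are not specified in the abstract.

    # Minimal sketch: fine-tune an off-the-shelf CNN to identify individual
    # Friesian cattle from top-down image crops. Folder layout, backbone,
    # and hyperparameters are assumptions for illustration only.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    NUM_INDIVIDUALS = 89                      # individuals in the in-barn stills
    DATA_DIR = "FriesianCattle2017/train"     # hypothetical one-folder-per-animal layout

    # Standard ImageNet-style preprocessing for a pretrained backbone.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    dataset = datasets.ImageFolder(DATA_DIR, transform=preprocess)
    loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=2)

    # Off-the-shelf backbone with its head replaced by an 89-way classifier,
    # one output per individual animal.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, NUM_INDIVIDUALS)

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

    model.train()
    for epoch in range(10):                   # small illustrative training budget
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

In a deployment resembling the paper's setups, such a classifier would be applied to cattle regions produced by a detection and localisation stage rather than to raw frames; the UAV video pipeline described in the abstract additionally aggregates predictions across frames of each clip.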
